
    Non-cooperative game theoretic approaches to bilateral exchange networks.

    Bilateral exchange networks are structures in which a finite set of players have a restricted framework of bargaining opportunities with each other. The key restrictions are that each player may participate in only one 'exchange' and each of these may only involve a pair of players. There is a large sociology literature which investigates these networks as a simplified model of social exchange. This literature contains many predictions and experimental results, but no non-cooperative game theoretic analysis. The aim of the thesis is to provide this. The analysis builds on the economic theory literature on non-cooperative bargaining, principally the alternating offers and Nash demand games. Two novel perfect information models based on the alternating offers game are considered, and it is demonstrated that they suffer from several difficulties. In particular, analysis of an example network shows that for these two models multiple subgame perfect equilibria exist with considerable qualitative differences. It is argued that an alternating offers approach to the problem is therefore unlikely to be successful for general networks. Models based on Nash demand games also have multiple solutions, but their simpler structure allows investigation of equilibrium selection by evolutionary methods. An agent-based evolutionary model is proposed, and the results of computer simulations based on this model under a variety of learning rules are presented. For small networks the agents often converge to unique long-term outcomes which offer support both for theoretical predictions of 2- and 3-player alternating offers models and for experimental results of the sociology literature. For larger networks the results become less precise, and it is shown that outcomes sometimes leave the core. It is argued that a modified evolutionary model has scope for avoiding these difficulties and providing a constructive approach to the problem for large networks.
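    The agent-based part of the thesis simulates repeated Nash demand games under various learning rules. The following is a minimal sketch of that kind of dynamic for a single pair of players, assuming a discretised unit surplus and a noisy best-response rule; the grid, noise rate and learning rule are illustrative assumptions, not the thesis's actual model, which covers whole networks and several rules. Which division such dynamics select in the long run is exactly the equilibrium-selection question the simulations investigate.

```python
# Sketch only: noisy best-response learning in a repeated two-player
# Nash demand game over a unit surplus. All constants are assumptions.
import random

DEMANDS = [i / 100 for i in range(1, 100)]   # discretised demands on a unit pie
NOISE = 0.05                                 # chance of an experimental random demand

def best_response(other_last):
    """Largest demand still compatible with the opponent's previous demand."""
    feasible = [d for d in DEMANDS if d + other_last <= 1.0]
    return max(feasible) if feasible else min(DEMANDS)

def simulate(rounds=1000, seed=0):
    rng = random.Random(seed)
    a, b = rng.choice(DEMANDS), rng.choice(DEMANDS)   # initial demands
    for _ in range(rounds):
        # both agents respond to the other's previous demand, with occasional noise
        new_a = rng.choice(DEMANDS) if rng.random() < NOISE else best_response(b)
        new_b = rng.choice(DEMANDS) if rng.random() < NOISE else best_response(a)
        a, b = new_a, new_b
    return a, b

a, b = simulate()
print(f"final demands: {a:.2f} and {b:.2f}; compatible: {a + b <= 1.0}")
```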

    A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation

    Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics, rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually-exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets. Comment: published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/12-STS406.
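    As a concrete illustration of the regularization class reviewed here, the sketch below runs rejection ABC on a toy normal-mean model and reduces a redundant summary vector to one dimension by ridge-regressing the parameter on the summaries in a pilot run. The toy model, the choice of summaries and every tuning constant are assumptions made for illustration; they are not the article's examples.

```python
# Sketch: rejection ABC with a ridge-regression projection of summary statistics.
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n)

def summaries(x):
    # deliberately redundant / partly uninformative summary vector
    return np.array([x.mean(), np.median(x), x.std(), x.max(), x.min()])

theta_true = 2.0
s_obs = summaries(simulate(theta_true))          # "observed" summaries

# pilot run: learn a 1-D projection by ridge regression of theta on the summaries
pilot_theta = rng.uniform(-5, 5, size=2000)
pilot_S = np.array([summaries(simulate(t)) for t in pilot_theta])
X = np.column_stack([np.ones(len(pilot_theta)), pilot_S])
lam = 1.0                                        # ridge penalty (assumed)
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ pilot_theta)

def project(s):
    return np.concatenate([[1.0], s]) @ beta

# main ABC run: accept draws whose projected summary is close to the observed one
prior_theta = rng.uniform(-5, 5, size=20000)
dist = np.array([abs(project(summaries(simulate(t))) - project(s_obs))
                 for t in prior_theta])
eps = np.quantile(dist, 0.01)                    # accept the closest 1%
posterior = prior_theta[dist <= eps]
print("posterior mean and sd:", posterior.mean(), posterior.std())
```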

    Variational bridge constructs for grey box modelling with Gaussian processes

    Latent force models are systems in which a mechanistic model describes the dynamics of the system state, with an unknown forcing term that is approximated by a Gaussian process. If such dynamics are non-linear, it can be difficult to estimate the posterior state and forcing term jointly, particularly when there are system parameters that also need estimating. This paper uses black-box variational inference to jointly estimate the posterior, designing a multivariate extension to local inverse autoregressive flows as a flexible approximator of the system. We compare estimates on systems where the posterior is known, demonstrating the effectiveness of the approximation, and apply the approach to problems with non-linear dynamics, multi-output systems and models with non-Gaussian likelihoods.
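    A minimal sketch of black-box variational inference for a toy non-conjugate model is given below, using the score-function gradient estimator and a single Gaussian approximating family. This is a deliberate simplification: the paper's approximating family is a multivariate extension of local inverse autoregressive flows, and the toy Poisson model, step sizes and sample sizes here are all assumptions.

```python
# Sketch: black-box (score-function) variational inference for a toy model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(lam=np.exp(1.5), size=30)          # synthetic data, true theta = 1.5

def log_joint(theta):
    # prior theta ~ N(0, 1); likelihood x_i ~ Poisson(exp(theta)), up to constants
    return -0.5 * theta ** 2 + np.sum(x * theta - np.exp(theta))

m, log_s = 0.0, 0.0                                # variational parameters of q = N(m, s^2)
lr, n_samples = 1e-3, 64

for it in range(3000):
    s = np.exp(log_s)
    theta = rng.normal(m, s, size=n_samples)
    # score-function (REINFORCE) gradient of the ELBO with a mean baseline
    log_q = -0.5 * ((theta - m) / s) ** 2 - np.log(s)
    f = np.array([log_joint(t) for t in theta]) - log_q
    f_centred = f - f.mean()                       # simple variance-reduction baseline
    grad_m = np.mean(f_centred * (theta - m) / s ** 2)
    grad_log_s = np.mean(f_centred * (((theta - m) ** 2) / s ** 2 - 1.0))
    m += lr * grad_m
    log_s += lr * grad_log_s

print(f"approximate posterior for theta: mean {m:.3f}, sd {np.exp(log_s):.3f}")
```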

    Bayesian model comparison with un-normalised likelihoods

    Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented. Some support for the use of biased estimates is found, but we advocate caution in the use of such estimates.
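    The core idea of random weight importance sampling can be sketched on a toy latent-variable model: the intractable likelihood inside each importance weight is replaced by an unbiased simulation-based estimate, so the weights themselves are random. The model, priors and sample sizes below are assumptions for illustration, not the paper's examples; note also that the ratio of the two evidence estimates is a consistent, but not unbiased, estimate of the Bayes factor.

```python
# Sketch: random-weight importance sampling estimates of two evidences and a Bayes factor.
import numpy as np

rng = np.random.default_rng(3)
y = 1.2                                           # a single "observed" data point

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def lik_hat(theta, m=100):
    """Unbiased estimate of p(y | theta) for the latent-variable model
    y | z ~ N(z, 1), z | theta ~ N(theta, 1), by averaging over simulated z."""
    z = rng.normal(theta, 1.0, size=m)
    return normal_pdf(y, z, 1.0).mean()

def evidence_hat(prior_sampler, n=5000):
    """Importance sampling from the prior with estimated (random) weights."""
    thetas = prior_sampler(n)
    return np.mean([lik_hat(t) for t in thetas])

# Model 1: theta fixed at 0; Model 2: theta ~ N(0, 1)
ev1 = lik_hat(0.0, m=50000)                       # no prior uncertainty: estimate p(y | 0)
ev2 = evidence_hat(lambda n: rng.normal(0.0, 1.0, size=n))

print("estimated Bayes factor BF12:", ev1 / ev2)
# exact values for this toy model: p(y|M1) = N(y; 0, 2), p(y|M2) = N(y; 0, 3)
print("exact Bayes factor BF12:   ", normal_pdf(y, 0.0, 2.0) / normal_pdf(y, 0.0, 3.0))
```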

    Amount of Information Needed for Model Choice in Approximate Bayesian Computation

    Approximate Bayesian Computation (ABC) has become a popular technique in evolutionary genetics for elucidating population structure and history due to its flexibility. The statistical inference framework has benefited from significant progress in recent years. In population genetics, however, its outcome depends heavily on the amount of information in the dataset, whether that be the level of genetic variation or the number of samples and loci. Here we look at the power to reject a simple constant population size coalescent model in favor of a bottleneck model in datasets of varying quality. Not only is this power dependent on the number of samples and loci, it also depends strongly on the level of nucleotide diversity in the observed dataset. Whilst overall model choice in an ABC setting is fairly powerful and quite conservative with regard to false positives, detecting weaker bottlenecks is problematic in smaller or less genetically diverse datasets, which limits the inferences possible in non-model organisms where the amount of information regarding the two models is often limited. Our results show it is important to consider these limitations when performing an ABC analysis, and that studies should perform simulations based on the size and nature of the dataset in order to fully assess the power of the study.
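    The generic ABC model-choice recipe used in this kind of power study is easy to sketch: draw a model index from its prior, simulate a dataset under that model, accept simulations whose summaries fall close to the observed ones, and read off posterior model probabilities as acceptance proportions. A real analysis would simulate coalescent genealogies under constant-size and bottleneck histories; the two toy data-generating models below stand in purely to show the mechanics, and every constant is an assumption.

```python
# Sketch: rejection-ABC model choice between two toy data-generating models.
import numpy as np

rng = np.random.default_rng(7)

def simulate_m0(n=100):
    # "constant size" stand-in: a single homogeneous Poisson rate
    lam = rng.uniform(1, 10)
    return rng.poisson(lam, size=n)

def simulate_m1(n=100):
    # "bottleneck" stand-in: a mixture of a reduced and a recovered rate
    lam = rng.uniform(1, 10)
    strength = rng.uniform(0.1, 0.9)
    which = rng.random(n) < 0.5
    return np.where(which, rng.poisson(lam * strength, size=n), rng.poisson(lam, size=n))

def summaries(x):
    return np.array([x.mean(), x.var()])

x_obs = simulate_m1()                       # pretend this is the observed dataset
s_obs = summaries(x_obs)

n_sims, models, dists = 50000, [], []
for _ in range(n_sims):
    m = rng.integers(0, 2)                  # uniform prior over the two models
    x = simulate_m0() if m == 0 else simulate_m1()
    models.append(m)
    dists.append(np.linalg.norm(summaries(x) - s_obs))

models, dists = np.array(models), np.array(dists)
accepted = models[dists <= np.quantile(dists, 0.01)]    # keep the closest 1%
print("posterior P(bottleneck model):", accepted.mean())
```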

    Bayesian Computation with Intractable Likelihoods

    This article surveys computational methods for posterior inference with intractable likelihoods, that is, where the likelihood function is unavailable in closed form or where evaluation of the likelihood is infeasible. We review recent developments in pseudo-marginal methods, approximate Bayesian computation (ABC), the exchange algorithm, thermodynamic integration, and composite likelihood, paying particular attention to advances in scalability for large datasets. We also mention R and MATLAB source code for implementations of these algorithms, where they are available.
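    Of the methods surveyed, the pseudo-marginal approach is perhaps the easiest to sketch: the exact likelihood in a Metropolis-Hastings acceptance ratio is replaced by a non-negative unbiased estimate, and that estimate is carried along with the current state, so the chain still targets the exact posterior. The toy latent-variable model, proposal scale and estimator settings below are illustrative assumptions, not taken from the survey.

```python
# Sketch: pseudo-marginal Metropolis-Hastings with an unbiased likelihood estimate.
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(1.0, np.sqrt(2.0), size=40)        # synthetic data

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def lik_hat(theta, m=30):
    """Unbiased estimator of prod_i p(y_i | theta) for the model
    y_i | z_i ~ N(z_i, 1), z_i | theta ~ N(theta, 1), averaging over simulated z_i."""
    est = 1.0
    for yi in y:
        z = rng.normal(theta, 1.0, size=m)
        est *= normal_pdf(yi, z, 1.0).mean()
    return est

def log_prior(theta):
    return -0.5 * theta ** 2                      # theta ~ N(0, 1), up to a constant

theta, lhat = 0.0, lik_hat(0.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.3)           # symmetric random-walk proposal
    lhat_prop = lik_hat(prop)
    log_accept = (np.log(lhat_prop) + log_prior(prop)) - (np.log(lhat) + log_prior(theta))
    if np.log(rng.random()) < log_accept:
        theta, lhat = prop, lhat_prop             # keep the estimate with the accepted state
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1000:]))   # discard a short burn-in
```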

    Lazy ABC


    Adapting the ABC distance function
